SLANG: Fast Structured Covariance Approximations for Bayesian Deep Learning with Natural Gradient

Neural Information Processing Systems

Uncertainty estimation in large deep-learning models is a computationally challenging task, where it is difficult to form even a Gaussian approximation to the posterior distribution. In such situations, existing methods usually resort to a diagonal approximation of the covariance matrix, even though diagonal approximations are known to give poor uncertainty estimates. To address this issue, we propose a new stochastic, low-rank, approximate natural-gradient (SLANG) method for variational inference in large deep models. Our method estimates a "diagonal plus low-rank" structure based solely on back-propagated gradients of the network log-likelihood. This requires strictly fewer gradient computations than methods that compute the gradient of the whole variational objective. Empirical evaluations on standard benchmarks confirm that SLANG enables faster and more accurate estimation of uncertainty than mean-field methods, and performs comparably to state-of-the-art methods.
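The "diagonal plus low-rank" structure is placed on the precision matrix, Σ⁻¹ = D + UUᵀ with D diagonal and U a d×k factor. A minimal NumPy sketch (names and sizes are illustrative, not the authors' code) of how one can draw a sample from N(μ, (D + UUᵀ)⁻¹) in O(dk²) time, using a thin SVD of D^{-1/2}U instead of ever forming the d×d covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 500, 5  # parameter dimension and rank; illustrative sizes

# Hypothetical precision factors: Sigma^{-1} = diag(dvec) + U @ U.T
dvec = rng.uniform(1.0, 2.0, size=d)
U = rng.normal(size=(d, k))

def sample_dplr_gaussian(mu, dvec, U, eps):
    """Return mu + B @ eps where B @ B.T = (diag(dvec) + U U^T)^{-1}.

    Uses D + U U^T = D^{1/2} (I + M M^T) D^{1/2} with M = D^{-1/2} U,
    and the symmetric square root of (I + M M^T)^{-1} from M's thin SVD.
    Cost is O(d k^2); the d x d covariance is never formed.
    """
    M = U / np.sqrt(dvec)[:, None]                   # D^{-1/2} U, shape (d, k)
    P, s, _ = np.linalg.svd(M, full_matrices=False)  # P has orthonormal columns
    scale = 1.0 / np.sqrt(1.0 + s**2) - 1.0          # eigenvalue correction, (k,)
    y = eps + P @ (scale * (P.T @ eps))              # (I + M M^T)^{-1/2} eps
    return mu + y / np.sqrt(dvec)                    # D^{-1/2} y

mu = np.zeros(d)
eps = rng.normal(size=d)
z = sample_dplr_gaussian(mu, dvec, U, eps)
```

Since only k singular values of a d×k matrix are needed, the cost stays linear in d, which is the property the abstract emphasizes for mean-field-sized budgets.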


Reviews: SLANG: Fast Structured Covariance Approximations for Bayesian Deep Learning with Natural Gradient

Neural Information Processing Systems

UPDATE Thank you for your rebuttal. The traditional approach is to use diagonal Gaussian posterior approximations. This paper proposes Gaussian posterior approximations in which the covariance matrix is approximated by the sum of a diagonal matrix and a low-rank matrix. The paper then outlines an efficient algorithm (with cost linear in the number of model parameters) that depends solely on gradients of the log-likelihood. This is made possible by a Hessian approximation built from gradients rather than from second derivatives of the log-likelihood.
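The gradient-based Hessian approximation the review refers to replaces second derivatives with outer products of per-example gradients (an empirical-Fisher-style estimate). A hedged sketch of how such a curvature estimate can be split into a rank-k part plus a diagonal remainder, using synthetic gradients in place of real back-propagated ones (this illustrates only the curvature split, not SLANG's natural-gradient update itself):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 64, 200, 4  # minibatch size, parameter dim, rank; illustrative
G = rng.normal(size=(n, d))  # stand-in for per-example back-propagated gradients

# Empirical-Fisher curvature estimate H_hat = G^T G / n, never formed explicitly.
# A thin SVD of G/sqrt(n) yields H_hat's top eigenpairs at O(n^2 d) cost.
_, svals, Vt = np.linalg.svd(G / np.sqrt(n), full_matrices=False)
U = Vt[:k].T * svals[:k]                         # d x k; U @ U.T is the rank-k part
full_diag = np.sum((G / np.sqrt(n))**2, axis=0)  # diag(H_hat) without forming H_hat
dvec = full_diag - np.sum(U**2, axis=1)          # leftover mass goes on the diagonal
```

The residual `dvec` is nonnegative because the discarded eigen-directions form a positive semi-definite remainder, so `diag(dvec) + U @ U.T` stays a valid precision-style estimate with the exact diagonal of the full outer-product matrix.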


SLANG: Fast Structured Covariance Approximations for Bayesian Deep Learning with Natural Gradient

Mishkin, Aaron, Kunstner, Frederik, Nielsen, Didrik, Schmidt, Mark, Khan, Mohammad Emtiyaz

Neural Information Processing Systems
